Overview

The imprecise nature of 3D printing limits this technology’s use beyond prototyping. In order to reliably use this technology for aerospace applications and aid certification of printed parts, the quality and repeatability of the parts must be improved. Much of the difficulty in obtaining high-quality printed parts lies in finding optimum printing parameters. Currently, this requires a great deal of trial and error, along with expertise in the 3D printing process and materials engineering.

We wish to create machine learning models that take in object parameters from the CAD design and optimize the printing parameters throughout the print to increase the printed part’s fidelity to the CAD design. More specifically, in this internship we aim to: collect the data, train models, and use the predicted optimum local printing parameters to print a part with high fidelity and repeatability.

This is the workflow for reproducing the work by Kevin Hunt and Austin Ebel (and classification work by Sean Zylich and Evan Rose).

Collecting the Data

Overview

Make CAD model

The part we have chosen to print is “3 sides of a cube”. This is a very simple 3D geometry. We used OpenSCAD to create the model.

Export as an .STL

Refine the model with NetFabb

This .STL file has very few vertices. We decided to use the vertices (X,Y,Z) as our data points of interest as obtaining them for any part is rather easy (just parse the STL file). However, we need more vertices than OpenSCAD created for this part. More vertices can be obtained by refining the model using NetFabb.

NetFabb can import a mesh and, for our purposes, refine it to a specified maximum edge length. We used the student version of this software for this project.

A step-by-step process goes as follows:

  • File -> Open Project
  • Prepare -> Repair Part

Under the new menu select:

  • Mesh Edit -> Refine Triangle Mesh

Set the first value, Max. Edge Length, to 0.1mm (or the desired maximum).

It’s important afterwards to click “Apply Repair” in the bottom left and remove the old part.

Now, go to:

  • File -> Export Path -> as STL (ASCII)

as the default STL export is a binary file, which is not compatible with the next part of our workflow.

Refining the model with NetFabb with 2mm maximum edge length (not 0.1mm).


The reason we chose 0.1mm for the maximum edge length is to ensure that, later on, every Gcode segment maps almost exactly to an STL vertex.

Use Libigl

In addition to XYZ coordinates, we also expect the part geometry to affect optimum printing parameters. Thus, we wanted to obtain geometric information for each vertex (XYZ data point).

A software package known as Libigl was incredibly useful for this part, as it was designed specifically to obtain geometric data from meshes. They provide an in-depth tutorial and installation instructions here:

http://libigl.github.io/libigl/tutorial/

with their official GitHub page here:

https://github.com/libigl/libigl/

  • Note: We did not install this on Windows: Libigl uses CMake to create executable files, so Windows requires a Visual Studio compiler, which is both a pain to set up and potentially not free. We recommend running on MacOS/Linux for this part.

Within Libigl, we looked specifically at the vertex normals, Gaussian Curvature, and Discrete Mean Curvature (along with X, Y, Z coordinates). While Libigl is written in C++, it has bindings in Python which made working these values into the training data much easier later on.

Libigl has done a fantastic job at minimizing the amount of code you need to write. In addition, there are Python tutorials on their GitHub for each of the three types of geometric data we’ve used.

For example, code to calculate and view the vertex normals looks as follows:

import sys, os  
  
# Add the igl library to the module's search path
sys.path.insert(0, '/Users/abebel/libigl/python/')
import pyigl as igl # Libigl's python bindings  
  
stl_path = '/Users/abebel/Desktop/refined_cube.stl'  
  
# Libigl uses the eigen class:
# More info found here: https://eigen.tuxfamily.org/dox/group__TutorialMatrixClass.html
OV = igl.eigen.MatrixXd()
OF = igl.eigen.MatrixXi()  
  
def compute_vertex_normals():
    # Syntax for computing normals
    igl.per_vertex_normals(V, F, igl.PER_VERTEX_NORMALS_WEIGHTING_TYPE_AREA, N_vertices)  
      
    return N_vertices
  
if __name__ == "__main__":  
  
    V = igl.eigen.MatrixXd()
    F = igl.eigen.MatrixXi()
    SVI = igl.eigen.MatrixXi()
    SVJ = igl.eigen.MatrixXi()  
    N_vertices = igl.eigen.MatrixXd()
  
    # Read in mesh and remove duplicate vertices, since multiple triangles
    # can contain the same vertex - reducing redundant training data
    igl.read_triangle_mesh(stl_path, OV, OF)
    igl.remove_duplicate_vertices(OV, OF, 0.0, V, SVI, SVJ, F)  
  
    N_vertices = compute_vertex_normals()
  
    # Vertex normals will be stored in N_vertices
    print(N_vertices)

Code for all the geometric data is also below:

import sys, os  
  
# Add the igl library to the modules search path
sys.path.insert(0, '/Users/abebel/libigl/python/')
import pyigl as igl  
  
stl_path = '/Users/abebel/Desktop/refined_cube.stl'  
  
OV = igl.eigen.MatrixXd()
OF = igl.eigen.MatrixXi()    
  
# Compute vertex normals  
def compute_vertex_normals():
    igl.per_vertex_normals(V, F, igl.PER_VERTEX_NORMALS_WEIGHTING_TYPE_AREA, N_vertices)
    
    return N_vertices
  
# Compute Gaussian Curvature  
def compute_gaussian():
    K = igl.eigen.MatrixXd()
    igl.gaussian_curvature(V, F, K)  
  
    return K  
# Compute Discrete Mean Curvature - in Libigl's tutorial
def compute_dmc():  
  
    HN = igl.eigen.MatrixXd()
    L = igl.eigen.SparseMatrixd()
    M = igl.eigen.SparseMatrixd()
    Minv = igl.eigen.SparseMatrixd()  
  
    igl.cotmatrix(V, F, L)
    igl.massmatrix(V, F, igl.MASSMATRIX_TYPE_VORONOI, M)  
  
    igl.invert_diag(M, Minv)  
  
    # Laplace-Beltrami of position
    HN = -Minv * (L * V)  
  
    # Extract magnitude as mean curvature
    H = HN.rowwiseNorm()  
  
    # Compute curvature directions via quadric fitting
    PD1 = igl.eigen.MatrixXd()
    PD2 = igl.eigen.MatrixXd()  
  
    PV1 = igl.eigen.MatrixXd()
    PV2 = igl.eigen.MatrixXd()  
  
    igl.principal_curvature(V, F, PD1, PD2, PV1, PV2)  
  
    # Mean curvature
    H = 0.5 * (PV1 + PV2)  
  
    return H  
  
  
if __name__ == "__main__":  
  
    V = igl.eigen.MatrixXd()
    F = igl.eigen.MatrixXi()
    SVI = igl.eigen.MatrixXi()
    SVJ = igl.eigen.MatrixXi()  
    N_vertices = igl.eigen.MatrixXd()   
    
    igl.read_triangle_mesh(stl_path, OV, OF)
    igl.remove_duplicate_vertices(OV, OF, 0.0, V, SVI, SVJ, F)  
  
    N_vertices = compute_vertex_normals()
    K = compute_gaussian() # K stores Gaussian Curvature for each vertex
    H = compute_dmc() # H stores DMC values for each vertex
  
    C = igl.eigen.MatrixXd()
    igl.jet(H, True, C) # Convert into RGB values  
  
    # Libigl's viewer
    viewer = igl.glfw.Viewer()
    viewer.data().set_mesh(V, F)
    viewer.data().set_colors(C)
    viewer.launch()

When set up to view the Discrete Mean Curvature values, the result should look something like this:

Discrete Mean Curvature Visualization (on 2mm edge length mesh)


Other Geometric Information

In addition to the geometric data obtained from Libigl, we felt the angles from the Z axis, as well as the print direction, might also be important to include. Angles from the Z axis may tell us about overhangs, and the print direction (defined as a vector facing out of the front of the printer) may give us information about how orientation affects quality.

The angles from the Z axis can be seen in the following picture:

 

We can use the vertex normals found through Libigl to find both θ and ϕ and write functions that return these angles for every vertex in the STL file.

The print angle was found just by using the standard angle between two vectors equation:

α = cos⁻¹( (u · v) / (|u| |v|) )

Where v is the vector (0,1,0), facing out of the front of the printer. In the code, compute_theta_vert() computes θ for each vertex, and compute_phi_vert() computes ϕ for each vertex.

The code for this looks as follows, and can also be visualized using Libigl’s viewer:

  • Note: N_vertices is still found from Libigl, so running this code separate from the above Libigl code will result in an error. We have all the geometric data calculation functions in one file.
import sys
import numpy as np
import math  
  
# Add the igl library to the modules search path
sys.path.insert(0, "/Users/abebel/libigl/python/")
import pyigl as igl
from iglhelpers import e2p  
  
# Compute theta from vertical (shown in picture)
def compute_theta_vert():    
    
    theta_vert = igl.eigen.MatrixXd(len(numpy_vertices), 1)  
  
    # Iterate over entire array calculating theta for each vertex
    for i in range(len(numpy_vertices)):
        if numpy_vertices[i][1] == 0:
            theta_vert[i] = 0
        else:
            theta_vert[i] = math.atan2(numpy_vertices[i][1], numpy_vertices[i][0])  
  
    return theta_vert  
  
# Compute phi from vertical (shown in picture)
def compute_phi_vert():  
  
    phi_vert = igl.eigen.MatrixXd(len(numpy_vertices), 1)  
    for i in range(len(numpy_vertices)):
        phi_vert[i] = math.acos(numpy_vertices[i][2] / np.linalg.norm(numpy_vertices[i]))  
  
    return phi_vert  
  
# Compute print angle according to above equation
def compute_angle_print():  
  
    print_angle = np.array([0, 1, 0])
    angle_print = igl.eigen.MatrixXd(len(numpy_vertices), 1)  
  
    for i in range(len(numpy_vertices)):
        angle_print[i] = math.acos(np.dot(print_angle, numpy_vertices[i]) / np.linalg.norm(numpy_vertices[i]))
  
    return angle_print  
      
if __name__ == "__main__":  
  
    numpy_vertices = e2p(N_vertices) # Libigl's helper function to convert eigen matrices into numpy arrays
    
    tv = compute_theta_vert()
    pv = compute_phi_vert()
    ap = compute_angle_print()  
      
    C = igl.eigen.MatrixXd()
    igl.jet(tv, True, C) # Convert into RGB values  
      
    viewer = igl.glfw.Viewer()
    viewer.data().set_mesh(V, F)
    viewer.data().set_colors(C)
    viewer.launch()

Later in the manual, we’ll talk about how these values are stored in our dataframes and how the training data is actually formatted.

Choose Printing Parameters

The parameters of interest are the Nozzle Temperature, Bed Temperature, Print Speed, Extrusion Multiplier, and Fan Speed. Nozzle temperature ranges were chosen to be 225 C to 245 C. Rao 2015 recommended not going below 225 C to prevent clogging, and we have also noticed material burning above 245 C. The ideal bed temperature is suggested to be 110 C, so we are testing 100-120 C. Print speed was chosen to vary between 20 and 80 mm/sec based on print speeds suggested here. Rao 2015 suggested the extrusion multiplier does best around 100%, thus we have chosen to vary it between 90-110%. The Fan Speed multiplier was chosen to be 10, 45, or 80% based on John Gardner’s suggestions.

The parameter ranges we are interested in testing for Natural ABS:

nozzle.temp <- seq(225,245,10)                 
bed.temp <- seq(100,120,10)                    
print.speed <- seq(20,80,20)                  
extrusion.multiplier <- seq(90,110,10)
fan.speed.percent <- c(10,45,80)

Combine these into possible combinations:

parameter.list <- list(Nozzle.Temp=nozzle.temp, Bed.Temp=bed.temp, 
                       Print.Speed=print.speed, Extrusion.Multiplier =extrusion.multiplier, 
                       Fan.Speed.Percent=fan.speed.percent)
all.options <- expand.grid(parameter.list)

There are 324 possible combinations.

Randomly order the rows

set.seed(42)
randomized.all.options <- all.options[sample(nrow(all.options),nrow(all.options)),]
# reset row numbers
rownames(randomized.all.options) <- NULL
# save these combinations to a text file
write.table("varied_parameters_print_order.txt", quote=F, x = randomized.all.options, col.names=FALSE)

The first few rows of the randomized order:

  Nozzle.Temp Bed.Temp Print.Speed Extrusion.Multiplier Fan.Speed.Percent
1 245 120 20 110 80
2 245 110 40 110 80
3 245 100 60 110 10
4 245 110 40 100 80
5 235 120 60 110 45
6 225 110 60 100 45

We chose to randomly order the prints. If you look at the 3D representation of the problem surface, imagine the red regions indicate an error while printing and the dark blue the highest fidelity. If we did not randomly select the printing parameters, our problem surface might be built in a straight line up the hill leading to the worst print conditions. By randomly sampling, we hope to approximate the surface of the problem before completing all 324 possible prints.

3D representation of a problem surface.

Note: a thermocouple was used to determine the actual nozzle temperature in comparison to the pronterface read-out.

   Pronterface.Temperature.Setting Thermocouple.Reading Difference
1 260 257 3.0
2 255 252 3.0
3 250 247 3.0
4 245 243 2.0
5 240 238 2.0
6 235 234 1.0
7 230 229 1.0
8 225 223 2.0
9 220 219 1.0
10 215 214 1.0
11 210 209 1.0
12 205 204 1.0
13 200 200 0.0
14 195 195.1 -0.1
15 190 190.2 -0.2
16 185 185.3 -0.3
17 180 180.5 -0.5
18 175 175.3 -0.3
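The offsets in this table could be folded into gcode generation by interpolating between the measured points. A minimal sketch, assuming linear interpolation is acceptable (the function name and the interpolation approach are ours, not the project’s):

```python
import numpy as np

# Thermocouple calibration data transcribed from the table above,
# ordered so the actual readings are ascending (required by np.interp).
SETTINGS = [175, 180, 185, 190, 195, 200, 205, 210, 215, 220,
            225, 230, 235, 240, 245, 250, 255, 260]
READINGS = [175.3, 180.5, 185.3, 190.2, 195.1, 200.0, 204.0, 209.0, 214.0,
            219.0, 223.0, 229.0, 234.0, 238.0, 243.0, 247.0, 252.0, 257.0]

def setting_for_actual_temp(target):
    """Return the Pronterface setting expected to yield `target` C at the nozzle."""
    # Interpolate with readings as x and settings as y to invert the mapping
    return float(np.interp(target, READINGS, SETTINGS))
```

For example, asking for an actual 243 C at the nozzle returns a setting of 245, matching the measured row in the table.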

Creating the training data - Algorithms and Usage Tips

As our end goal is to optimize parameters along the toolpath, but we gather geometric data from the STL file, we needed an efficient way to convert STL vertex information into Gcode segment information. This involved several steps, discussed below.

Correctly Align the Gcode and STL Files

Our first challenge can be perfectly summed up in the picture below.

It is rather hard to see, but the STL vertices on the top and right side extend beyond the toolpath. This is an issue, as Gcode segments that lie on the corner might not receive the correct ‘corner-like’ information from Libigl.

The toolpath is adjusted so that, given how the material is deposited, it does not exceed our 50mm x 50mm x 50mm design. The actual dimensions specified by the toolpath are therefore 49.5mm x 49.5mm x 49.4mm. In addition, the walls of our original STL file were 1mm wide; however, Cura creates a toolpath along the walls only 0.5mm wide.

We expect this will be less of an issue for models that are less sharp and thin. While our ‘3 sides of a cube’ is simple and easy to print, it is somewhat of a double-edged sword: to perfectly capture the geometries, we need to exactly align the STL file and the Gcode segments.

We remedied this issue by simply adjusting the CAD model to reflect the Gcode path. Initially, we scaled all of the X, Y, Z coordinates to meet the Gcode toolpath; however, this still left the walls ~0.5mm apart. We realize this isn’t the most generalizable option, but, again, we feel this issue is exacerbated by our model. In the future, it might be worth looking into better ways to match the toolpath with the STL file.

Secondly, our STL file needed to share the Gcode file’s coordinate system so that we could directly associate geometric information from the STL file with Gcode segments. This was done by finding the minimum X, Y, Z coordinates of both the STL and Gcode files and taking the difference for each axis. Then we added that difference to the STL coordinates to bring both files into the same coordinate system.

  • Note: If you rotate the model within Cura, you need to export it both as Gcode as well as STL and realign coordinate systems based on this new STL file. The ‘add an offset’ method will not work if you have an old STL file with a rotated Gcode toolpath. More information about this code can be found in the database section below.
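The ‘find the minima and add the difference’ step described above can be sketched as follows (function and variable names are ours; it assumes both point sets are already loaded as NumPy arrays):

```python
import numpy as np

def align_stl_to_gcode(stl_vertices, gcode_points):
    """Shift STL vertices so both point clouds share a coordinate origin.

    Sketch of the offset method described above: find the per-axis minimum
    of each point set, then translate the STL vertices by the difference.
    """
    offset = gcode_points.min(axis=0) - stl_vertices.min(axis=0)
    return stl_vertices + offset
```

After this shift, the minimum X, Y, Z of the STL vertices coincide with those of the Gcode toolpath, so distances between the two can be compared directly.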

Refine the model and obtain geometric information

As mentioned above, our STL model needed to be refined so that each vertex’s geometric data would fall extremely close to a Gcode segment. Once refined to a max edge length of 0.1mm, there were roughly 3,000,000 vertices in our STL file. Obtaining geometric data for all of the vertices took around 15 minutes.

Segment the Gcode

In addition to refining the STL model, modifications must be made to the Gcode to ensure there are a proper number of points to actually train the model on. Furthermore, there’s no point in having ~3,000,000 vertices of geometric data if only 500 get mapped to Gcode segments. In our ‘3 sides of a cube’, some Gcode segments spanned the entire 50mm wall, looking like this:

We decided it was best to break these Gcode segments into pieces of roughly 2mm each, since that could accurately map geometric data while not exceeding the resolution that Sean/Evan’s neural network could classify. These changes were then written back into the Gcode like so:

At each segment, we have the ability to specify changes in parameters (ultimately predicted and smoothed from our own machine learning algorithm).

The code for this looks as follows; however, it still needs to be updated to take into account the predictions from our model:

import math
nozzle_temperature = "245"
bed_temperature = "100"
print_speed = "60"
extrusion_multiplier = "110"
fan_speed_percent = "10"
segment_size = 2 #mm
offset = 0 # lines
        
with open("C:\\Users\\kahunt1\\Documents\\OpenSCAD\\gcodes\\3 sided cube\\print_3_relative_extrusion.gcode") as base_gcode:
    # load in the base gcode
    gcode_data = base_gcode.readlines()
    gcode_to_write = []
    # Set the overall bed temperature
    gcode_data[15+offset] = "M140 S" + bed_temperature + " ; start bed heating up\n"
    gcode_data[63+offset] = "M190 S" + bed_temperature + " ; wait for bed to reach printing temp\n"
    gcode_data[576+offset] = "M140 S" + bed_temperature + "\n"
    # Set First Extrusion Multiplier
    gcode_data[67+offset] = "M221 S100\n"
    # Set First Fan Speed Percent
    gcode_data[71+offset] = "M106 S" + str(round(255*int(fan_speed_percent)/100,1)) + '\n'
    # Set Print Speed and Extrusion Multiplier
    make_changes = False
    print_finished = False
    line_counter = 0
    previous_extrusion_multiplier = float(gcode_data[67+offset][-4:-1])
    X_prev = 0
    Y_prev = 0
    current_X = 0
    current_Y = 0
    for code_line in gcode_data:
        line_to_write = code_line
        # make sure outside of skirt
        # TO DO: change this to instead check type not skirt
        if code_line == ";TYPE:WALL-INNER\n":
            make_changes = True
        # make sure not at end
        if "M104 S0" in code_line:
            make_changes = False
        # store the current X and Y
        if "X" in code_line:
            current_X = float([x[1:] for x in code_line.split() if "X" in x][0])
        if " Y" in code_line:
            current_Y = float([y[1:] for y in code_line.split() if "Y" in y][0])
        #TO DO: prevent infill split up
        # begin changing the line if it's not the skirt or after the print
        if make_changes:
            # if the line is an extrusion and movement command
            if all(x in code_line for x in ['G1', 'X', 'Y', 'E']):
                # grab the amount of extrudate
                extrusion = float([e[1:] for e in code_line.split() if "E" in e][0])
                # start segmenting the line up
                # a^2 + b^2 = c^2
                extrusion_distance = math.sqrt((current_X-X_prev)**2+(current_Y-Y_prev)**2)
                # based on the segment sizes, how many whole segments will be needed
                number_of_new_segments = max(1,round(extrusion_distance/segment_size))
                # based on that many segments, how far do we have to step in the X and Y direction each time
                X_step_distance = (current_X-X_prev)/number_of_new_segments
                Y_step_distance = (current_Y-Y_prev)/number_of_new_segments
                # store the new coordinates for the segments
                new_X_coords = []
                new_Y_coords = []
                for i in range(0,number_of_new_segments-1):
                    new_X_coords.append(X_prev + X_step_distance*(i+1))
                    new_Y_coords.append(Y_prev + Y_step_distance*(i+1))
                new_X_coords.append(current_X)
                new_Y_coords.append(current_Y)
                # segment the extrusion amount up
                new_extrusion_amount = round(extrusion/number_of_new_segments,5)
                # write these new instructions out
                line_to_write = []
                for i in range(0,number_of_new_segments):
                    # calculate the new extrusion amount
                    # original*new flow/previous flow
                    #     new_extrude_amount = round(float(extruded_amount)*float(extrusion_multiplier)/previous_extrusion_multiplier,5)
                    # Set Fan Speed Percent
                    fan_speed_set = "M106 S" + str(int(255*float(fan_speed_percent)/100)) + ' ; Set Fan Speed Percent\n'
                    # Set Hot end temp
                    wait_hot_end = "M104 S" + nozzle_temperature + " ; Set Nozzle Temp and continue without waiting\n"
                    gcode_command = "G1 F" + str(int(print_speed)*60) + " X" + str(new_X_coords[i]) \
                    + " Y" + str(new_Y_coords[i]) + " E" + str(new_extrusion_amount) + "\n"
                    
                    line_to_write.extend([fan_speed_set, wait_hot_end, gcode_command])
        # store the previous X and Y
        X_prev = current_X
        Y_prev = current_Y
        # collect either the untouched line or the list of new segment lines
        if isinstance(line_to_write, list):
            gcode_to_write.extend(line_to_write)
        else:
            gcode_to_write.append(line_to_write)
    # write the new gcode to a file
    new_filename = "test_segmenting_no_temp_waiting_randomize_fan_speed.gcode"
    with open(new_filename, 'w') as new_gcode:
        new_gcode.writelines(gcode_to_write)

Database Optimizations - Pandas DataFrame

Quick aside: Originally, we refined the STL file to a 2mm maximum edge length; however, we realized that vertex information from multiple layers above/below a Gcode segment was often being associated with that segment. This is obviously not ideal, as we really want each Gcode segment to be mapped to a vertex on the same layer as the segment.

This is where we decided to further refine our model to a 0.1mm max edge length, resulting in ~3,000,000 vertices (up from ~14,000). Dealing with this much data meant rethinking how we stored it: we moved away from arrays to a more database-like approach with the Pandas DataFrame. Information about this can be found here:

https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html

Essentially, this gave us a massive performance boost in creating our training data, as we were now using a data structure made for incredibly large amounts of data like our own. You can see the database structure in action in the code above.
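To illustrate the DataFrame-based storage (the column names here are our own placeholders, not the project’s exact schema):

```python
import numpy as np
import pandas as pd

# Sketch: hold per-vertex geometric data in a single DataFrame. With
# ~3,000,000 rows, vectorized column operations replace slow per-element
# Python loops over arrays.
n = 5  # stand-in for the ~3,000,000 refined vertices
rng = np.random.default_rng(0)
vertices = pd.DataFrame({
    "X": rng.random(n),
    "Y": rng.random(n),
    "Z": rng.random(n),
    "gaussian_curvature": rng.random(n),
    "mean_curvature": rng.random(n),
})

# Example vectorized query: every vertex below a given layer height,
# computed in one pass without an explicit Python loop.
low_layer = vertices[vertices["Z"] < 0.5]
```

Filtering, joining, and grouping operations like this are what made building the training data tractable at this scale.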

Classify error types to vertices

Now we have a dataset that looks like this: Example Dataset. This dataset contains 6,241 XYZ points on the outer surface of the 3 sides of a cube (chosen from NetFabb refinement) and their associated geometric properties (from Libigl), multiplied by the 324 prints. This results in 2,022,084 (about 2.02 x 10^6) data points. Each of these needs to be labelled with its error type: none, blob, warping, or delamination.

Sean and Evan are working on an image classification algorithm to automatically determine the error type from pictures of the walls.

Train Models to predict the Optimum Printing Parameters for each XYZ

Using the above dataset, we wish to train a machine learning model to predict the optimum printing parameters for each point on the part, given the printer XYZ, part XYZ and geometric properties.

Multi-Output approach

The approach of giving all the input and getting all of the optimum printing parameters out at the same time is called Multi-Output, Multi-Target, Multi-Variate, or Multi-Response regression when continuous values are predicted (there are different names and techniques when the problem is a classification problem). This approach is more complicated than fitting a separate model to each output because there are expected relationships between the outputs. The machine learning techniques best suited for this approach are support vector regression, regression trees, and neural networks. A good review by Borchani et al. (2015) outlines this problem and recent attempts to improve the techniques.

Scikit-learn has a few models that natively support multi-output regression, such as DecisionTreeRegressor, RandomForestRegressor, and KNeighborsRegressor; other estimators can be wrapped with MultiOutputRegressor.
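A minimal sketch of native multi-output regression in scikit-learn (the features and targets here are random placeholders, not our actual training data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Sketch: a tree ensemble trained to predict all five printing parameters
# at once from geometric features. Data is synthetic, purely illustrative.
rng = np.random.default_rng(42)
X = rng.random((100, 8))   # e.g. XYZ, normals, curvatures per point
y = rng.random((100, 5))   # e.g. the five printing parameters

model = RandomForestRegressor(n_estimators=10, random_state=0)
model.fit(X, y)                      # multi-output y handled natively
predictions = model.predict(X[:3])   # one row of 5 parameters per query point
```

Because the trees are shared across outputs, correlated outputs can be captured without fitting five separate models.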

Single-Output approach

Alternatively, if we instead use printer XYZ, part XYZ, geometric properties and printing parameters as input and predict only the error type as output, we may have an easier time training a single-output classification model. There are various packages available for this type of prediction. Additionally, this technique could allow us to create a list of possible parameters that will yield no error for a particular point, instead of a single parameter combination that would be provided from the multi-output models suggested above.

This type of classification yields a percentage representing the predicted likelihood of each class. For example, say we are looking at the model output for a given point (X=1, Y=1, Z=1) and various printing parameter combinations. The model would yield different predicted likelihoods of the “none” error type:

Nozzle.Temp Bed.Temp Print.Speed Extrusion.Multiplier Fan.Speed.Percent Predicted.Likelihood.None.Error
245 120 20 110 80 0.98
245 110 40 110 80 0.90
235 120 60 110 45 0.65
225 110 60 100 45 0.54
245 100 60 110 10 0.12
245 110 40 100 80 0.00

Here we can see the best predicted printing parameters are the first combination, but the second combination might also print well. Being able to pick among parameter combinations that fall above some prediction threshold might enable better smoothing logic when it’s time to print using the optimized local parameters.
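A hedged sketch of this single-output approach using scikit-learn’s predict_proba (synthetic data; the feature layout and the 0.5 threshold are illustrative assumptions, not the project’s choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: classify "no error" vs. "some error" from point features plus
# printing parameters, then keep only the parameter combinations whose
# predicted probability of "no error" clears a threshold.
rng = np.random.default_rng(0)
X = rng.random((200, 10))     # features: XYZ, geometry, printing parameters
y = rng.integers(0, 2, 200)   # 1 = no error, 0 = some error (synthetic)

clf = LogisticRegression(max_iter=1000).fit(X, y)

candidates = rng.random((6, 10))              # parameter combos for one point
p_none = clf.predict_proba(candidates)[:, 1]  # likelihood of "no error"
good = candidates[p_none > 0.5]               # combos above the threshold
```

Ranking `candidates` by `p_none` reproduces a table like the one above, from which any sufficiently likely combination could be chosen.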

A timing test using a pre-trained sklearn MLPRegressor model to predict over 2 million values completed the task in ~3 seconds.

Use Predicted Optimum Local Printing Parameters

Once we have the predicted optimum printing parameters, we will need to communicate those parameters to the 3D printer. This will be done through the gcode, but first we need to associate each segment of gcode with the correct printing parameters. Here you can see the STL points (black) overlaid on the toolpath (light blue). Note: The toolpath is not 50 mm x 50 mm x 50 mm; it is actually 49.5 mm x 49.5 mm x 49.4 mm. Thus, the STL points were scaled to match the dimensions of the toolpath.

We wrote a Python script to chop the toolpath into 2 mm segments and associate each segment with the nearest vertex. Here we have colored the “gcode” by its associated vertex. This indicates that our mapping algorithm works as expected. Note: To produce this image, the gcode was cut into 0.1 mm segments and mapped to the nearest vertex.

The segments of gcode nearby that share the same color would be printed with the same printing parameters.

Insert code for associating the segments with a vertex
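Until that script is added, the segment-to-vertex association can be sketched as a brute-force nearest-neighbor lookup (names are ours; the actual script may use a spatial index instead):

```python
import numpy as np

def nearest_vertex_indices(segment_midpoints, vertices):
    """For each gcode segment midpoint, return the index of the closest STL vertex.

    Brute-force O(segments x vertices) sketch; a KD-tree would scale far
    better to the ~3,000,000-vertex refined mesh.
    """
    indices = []
    for point in segment_midpoints:
        distances = np.linalg.norm(vertices - point, axis=1)
        indices.append(int(distances.argmin()))
    return indices
```

Each gcode segment then inherits the geometric features (and, eventually, the predicted parameters) of the vertex at its returned index.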

Next, the printing parameters for each segment need to be communicated to the printer via gcode. Here is a script that takes in the original gcode, cuts up the lines, and rewrites the segmented gcode. All of the parameters of interest, besides Nozzle Temperature, can be incorporated instantaneously into the print without compromising the printing process. The Nozzle Temperature, however, cannot be changed immediately; heating/cooling must be done in the background while the printing continues.

Once the data has been collected, the model has been trained, and we have optimum local printing parameters, these two Python scripts (one for splicing the gcode and matching segments with a vertex, one for rewriting the spliced gcode using the predicted parameters) will be combined into one script.